Step size adaptation in first-order method for stochastic strongly convex programming

Author

  • Peng Cheng
Abstract

This paper considers the stochastic convex program

min_{x ∈ Γ_f} f(x),

where f(·) : Γ_f → ℝ is an unknown, not necessarily smooth, multivariate, λ-strongly convex function, and Γ_f is its convex domain of definition. The algorithm cannot sample f(·) accurately by any means, since f(·) itself is unknown. Instead, it may call stochastic oracles ω̃(·) at chosen points x̃_1, …, x̃_n; these are unbiased and independent probabilistic estimators of the first-order local information of f(·) in the vicinity of each x̃_i.
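To fix ideas, here is a minimal sketch (not the paper's adaptive algorithm) of a first-order method that sees f only through such a stochastic oracle. The quadratic test function standing in for the unknown f, the Gaussian oracle noise, and the classical 1/(λt) diminishing step size are all illustrative assumptions:

    # Minimal illustration of the setting: minimize a lambda-strongly convex f
    # using only a noisy first-order oracle. The hidden quadratic test function
    # and Gaussian oracle noise are assumptions for demonstration only.
    import numpy as np

    rng = np.random.default_rng(0)
    lam = 0.5                        # strong convexity parameter lambda
    x_star = np.array([1.0, -2.0])   # minimizer of the hidden test function

    def noisy_grad_oracle(x):
        """Unbiased stochastic estimate of grad f(x) for f(x) = (lam/2)||x - x*||^2."""
        return lam * (x - x_star) + rng.normal(scale=0.1, size=x.shape)

    x = np.zeros(2)
    for t in range(1, 10_001):
        eta = 1.0 / (lam * t)        # classical O(1/(lambda*t)) step size schedule
        x -= eta * noisy_grad_oracle(x)

    print(x)                         # approaches x_star = [1, -2]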

Similar articles

Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming

In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...
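As a hedged sketch of the inexact-Newton ingredient alone (not this paper's interior-point method for symmetric cones): the Newton system is solved only approximately, here by a few conjugate-gradient steps. The smooth convex test function and the truncation limits are assumptions for illustration:

    import numpy as np

    def truncated_cg(H, g, iters=3):
        """Approximately solve H d = -g with a few conjugate-gradient steps."""
        d = np.zeros_like(g)
        r = -g.copy()          # residual of H d = -g at d = 0
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Hp = H @ p
            alpha = rs / (p @ Hp)
            d += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-10:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return d

    def grad_hess(x):
        """Gradient and Hessian of the convex test function sum(exp(x)) + 0.5*||x||^2."""
        return np.exp(x) + x, np.diag(np.exp(x) + 1.0)

    x = np.linspace(-1.0, 1.0, 5)
    for _ in range(25):
        g, H = grad_hess(x)
        x = x + truncated_cg(H, g)   # inexact Newton step: CG stopped early

    g, _ = grad_hess(x)
    print(np.linalg.norm(g))         # stationarity residual shrinks toward zero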

A trust-region sequential quadratic programming with new simple filter as an efficient and robust first-order reliability method

Real-world applications involving nonlinear functions of multiple variables can be implicitly assessed through structural reliability analysis. This study establishes an efficient algorithm for solving highly nonlinear structural reliability problems. To this end, a numerical nonlinear optimization algorithm with a new simple filter is first defined to locate and estimate the most ...

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop c...
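For reference, the interpolation theorem underpinning this formulation can be stated compactly, with L the smoothness and μ the strong convexity constant (standard notation, not introduced in the truncated text above): a finite set of triples (x_i, g_i, f_i) can be interpolated by some L-smooth, μ-strongly convex function f with f(x_i) = f_i and ∇f(x_i) = g_i if and only if, for every pair (i, j),

    f_i ≥ f_j + ⟨g_j, x_i − x_j⟩ + (1 / (2(1 − μ/L))) · ( (1/L)‖g_i − g_j‖² + μ‖x_i − x_j‖² − (2μ/L)⟨g_i − g_j, x_i − x_j⟩ ).

These finitely many convex constraints are what allow the worst case over an infinite-dimensional function class to be posed as a finite convex program over a method's iterates.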

A Variance Reduced Stochastic Newton Method

Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have been recently proposed. However, these typically only achieve sub-linear convergence rates and have no...
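As a hedged illustration of the variance-reduction ingredient, shown here in its plain SVRG gradient form rather than combined with quasi-Newton updates as the abstract proposes, consider the following sketch; the least-squares objective and all constants are assumptions for demonstration:

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 5
    A = rng.normal(size=(n, d))
    b = rng.normal(size=n)
    grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]   # gradient of one loss term
    full_grad = lambda w: A.T @ (A @ w - b) / n

    w = np.zeros(d)
    for epoch in range(30):
        w_ref = w.copy()
        mu = full_grad(w_ref)                        # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, with variance vanishing
            # as the iterates approach the optimum.
            v = grad_i(w, i) - grad_i(w_ref, i) + mu
            w -= 0.01 * v

    print(np.linalg.norm(full_grad(w)))              # gradient norm shrinks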

Barzilai-Borwein Step Size for Stochastic Gradient Descent

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size or to tune a fixed step size by hand. Apparently, these two approaches can be time consum...
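For context, the deterministic Barzilai-Borwein rule that this abstract adapts to the stochastic setting chooses the step size from two successive iterates and gradients: α_k = sᵀs / (sᵀy), with s = x_k − x_{k−1} and y = g_k − g_{k−1}. Below is a minimal sketch on an assumed deterministic quadratic; the per-epoch averaging that makes the rule robust to gradient noise is the paper's contribution and is not reproduced here:

    import numpy as np

    A = np.array([[3.0, 0.5], [0.5, 1.0]])   # SPD Hessian of the test quadratic
    b = np.array([1.0, -1.0])
    grad = lambda x: A @ x - b               # gradient of f(x) = 0.5*x^T A x - b^T x

    x_prev = np.zeros(2)
    g_prev = grad(x_prev)
    x = x_prev - 0.1 * g_prev                # one plain gradient step to start
    for _ in range(50):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = s @ s / (s @ y)              # BB step size (BB1 variant)
        x_prev, g_prev = x, g
        x = x - alpha * g

    print(x, np.linalg.solve(A, b))          # both approach the minimizer A^{-1} b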

Journal:
  • CoRR

Volume: abs/1110.3001

Publication date: 2011